We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
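As a rough illustration of the forecasting inputs described above, the sketch below models each scored actor as a track of timestamped states; the class and field names are hypothetical and do not reflect the released Argoverse 2 API.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical containers mirroring the per-actor fields named in the abstract
# (location, heading, velocity, category); the real av2 package defines its own schema.
@dataclass
class TrackState:
    timestep: int
    x: float          # map-aligned position (m)
    y: float
    heading: float    # yaw (radians)
    vx: float         # velocity components (m/s)
    vy: float

@dataclass
class ActorTrack:
    track_id: str
    category: str               # e.g. "vehicle", "pedestrian", "cyclist"
    is_scored: bool             # models are evaluated on scored actors only
    history: List[TrackState]   # observed states provided as model input

def forecasting_targets(tracks: List[ActorTrack]) -> List[ActorTrack]:
    """Select the actors whose future motion a model must predict."""
    return [t for t in tracks if t.is_scored]
```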
Modern neural networks are over-parameterized and thus rely on strong regularization such as data augmentation and weight decay to reduce overfitting and improve generalization. The dominant form of data augmentation applies invariant transforms, where the learning target of a sample is invariant to the transform applied to that sample. We draw inspiration from human visual classification studies and propose generalizing augmentation with invariant transforms to soft augmentation, where the learning target softens non-linearly as a function of the degree of the transform applied to the sample: e.g., more aggressive image crop augmentations produce less confident learning targets. We demonstrate that soft targets allow for more aggressive data augmentation, offer more robust performance boosts, work with other augmentation policies, and, interestingly, produce better calibrated models (since they are trained to be less confident on aggressively cropped/occluded examples). Combined with existing aggressive augmentation strategies, soft targets 1) double the top-1 accuracy boost across Cifar-10, Cifar-100, ImageNet-1K, and ImageNet-V2, 2) improve model occlusion performance by up to $4\times$, and 3) halve the expected calibration error (ECE). Finally, we show that soft augmentation generalizes to self-supervised classification tasks.
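To make the softening idea concrete, here is a minimal PyTorch sketch of a soft target whose correct-class confidence decays non-linearly with the fraction of the image left visible after cropping; the decay schedule and hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_crop_target(labels, visible_frac, num_classes, k=2.0, min_conf=None):
    """Build soft targets for crop augmentation (illustrative sketch):
    the confidence on the true class decays non-linearly as less of the
    image remains visible, and the leftover probability mass is spread
    uniformly over the remaining classes."""
    if min_conf is None:
        min_conf = 1.0 / num_classes  # fully occluded -> chance-level target
    conf = min_conf + (1.0 - min_conf) * visible_frac.clamp(0, 1) ** k   # (N,)
    off = (1.0 - conf) / (num_classes - 1)                               # (N,)
    targets = off.unsqueeze(1).repeat(1, num_classes)                    # (N, C)
    targets.scatter_(1, labels.unsqueeze(1), conf.unsqueeze(1))          # true class
    return targets

# Usage with a soft cross-entropy loss:
# soft_t = soft_crop_target(labels, visible_frac, num_classes)
# loss = torch.sum(-soft_t * F.log_softmax(logits, dim=1), dim=1).mean()
```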
We tackle the problem of novel class discovery and localization (NCDL). In this setting, we assume a source dataset with supervision for only some object classes. Instances of other classes need to be discovered, classified, and localized automatically based on visual similarity, without any human supervision. To tackle NCDL, we propose a two-stage object detection network, Region-based NCDL (RNCDL), that uses a region proposal network to localize regions of interest (RoIs). We then train our network to classify each RoI either as one of the known classes seen in the source dataset or as one of the novel classes, with a long-tail distribution constraint on the class assignments that reflects the natural frequency of classes in the real world. By training our detection network with this objective in an end-to-end manner, it learns to classify all region proposals for a large variety of classes, including those not part of the labeled object class vocabulary. Our experiments, conducted on the COCO and LVIS datasets, reveal that our method is significantly more effective than multi-stage pipelines that rely on traditional clustering algorithms. Furthermore, we demonstrate the generality of our approach by applying our method to the large-scale Visual Genome dataset, where our network successfully learns to detect various semantic classes without direct supervision.
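One simple way to express a long-tail constraint on class assignments is to regularize the batch-level marginal of RoI predictions toward a power-law prior over known plus novel classes; the sketch below illustrates that idea under those assumptions and is not necessarily the exact objective used by RNCDL.

```python
import torch
import torch.nn.functional as F

def longtail_assignment_loss(roi_logits, prior):
    """Encourage the marginal distribution of predicted classes over a batch
    of RoIs to follow a long-tailed prior (one way to realize the constraint
    described in the abstract)."""
    probs = F.softmax(roi_logits, dim=1)      # (num_rois, num_classes)
    marginal = probs.mean(dim=0)              # empirical class-usage distribution
    return F.kl_div(marginal.log(), prior, reduction="sum")

# Assumed power-law prior over 80 known + 1000 novel classes:
num_known, num_novel = 80, 1000
freqs = torch.arange(1, num_known + num_novel + 1, dtype=torch.float) ** -1.0
prior = freqs / freqs.sum()
```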
Lifelong learners must recognize concept vocabularies that evolve over time. A common yet underexplored scenario is learning with class labels over time that refine/expand old classes. For example, humans learn to recognize ${\tt dog}$ before dog breeds. In practical settings, dataset $\textit{versioning}$ often introduces refinement to ontologies, such as autonomous vehicle benchmarks that refine a previous ${\tt vehicle}$ class into ${\tt school-bus}$ as autonomous operations expand to new cities. This paper formalizes a protocol for studying the problem of $\textit{Learning with Evolving Class Ontology}$ (LECO). LECO requires learning classifiers in distinct time periods (TPs); each TP introduces a new ontology of "fine" labels that refines old ontologies of "coarse" labels (e.g., dog breeds that refine the previous ${\tt dog}$). LECO explores such questions as whether to annotate new data or relabel the old, how to leverage coarse labels, and whether to finetune the previous TP's model or train from scratch. To answer these questions, we leverage insights from related problems such as class-incremental learning. We validate them under the LECO protocol through the lens of image classification (CIFAR and iNaturalist) and semantic segmentation (Mapillary). Our experiments lead to surprising conclusions; while the current status quo is to relabel existing datasets with new ontologies (such as COCO-to-LVIS or Mapillary1.2-to-2.0), LECO demonstrates that a far better strategy is to annotate $\textit{new}$ data with the new ontology. However, this produces an aggregate dataset with inconsistent old-vs-new labels, complicating learning. To address this challenge, we adopt methods from semi-supervised and partial-label learning. Such strategies can surprisingly be made near-optimal, approaching an "oracle" that learns on the aggregate dataset exhaustively labeled with the newest ontology.
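A common partial-label treatment of the old coarse labels, sketched below, is to marginalize the fine-class probabilities over the fine classes compatible with each coarse label; the ontology mapping and the loss are illustrative assumptions rather than LECO's prescribed recipe.

```python
import torch
import torch.nn.functional as F

# Hypothetical coarse->fine ontology mapping: old "coarse" labels from an
# earlier time period constrain, but do not determine, the new "fine" labels.
COARSE_TO_FINE = {
    "vehicle": ["car", "school-bus", "truck"],
    "dog": ["husky", "beagle", "poodle"],
}

def coarse_label_loss(fine_logits, coarse_groups):
    """Partial-label loss for coarsely-labeled examples: maximize the total
    probability assigned to the fine classes compatible with the coarse label.
    `coarse_groups[i]` is the list of fine-class indices allowed for sample i."""
    probs = F.softmax(fine_logits, dim=1)                 # (N, num_fine)
    losses = []
    for i, group in enumerate(coarse_groups):
        losses.append(-torch.log(probs[i, group].sum() + 1e-8))
    return torch.stack(losses).mean()
```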
Multiple existing benchmarks involve tracking and segmenting objects in video, e.g., Video Object Segmentation (VOS) and Multi-Object Tracking and Segmentation (MOTS), but there is little interaction between them due to the use of disparate benchmark datasets and metrics (e.g., J&F, mAP, sMOTSA). As a result, published works usually target a particular benchmark and are not easily comparable to one another. We believe that the development of generalized methods that can address multiple tasks requires greater cohesion among these research sub-communities. In this paper, we aim to facilitate this by proposing BURST, a dataset that contains thousands of diverse videos with high-quality object masks, and an associated benchmark with six tasks involving object tracking and segmentation in video. All tasks are evaluated using the same data and comparable metrics, which enables researchers to consider them in unison and therefore to more effectively pool knowledge from different methods across different tasks. Additionally, we demonstrate several baselines for all tasks and show that approaches for one task can be applied to another with a quantifiable and explainable performance difference. Dataset annotations and evaluation code are available at: https://github.com/ali2500/burst-benchmark.
We describe a data-driven method for inferring camera viewpoints given multiple images of an arbitrary object. This task is a core component of classic geometric pipelines such as SfM and SLAM, and is also a vital pre-processing requirement for contemporary neural approaches (e.g., NeRF) to object reconstruction and view synthesis. In contrast to existing correspondence-driven methods, which perform poorly given sparse views, we propose a top-down prediction-based approach to estimating camera viewpoints. Our key technical insight is the use of an energy-based formulation to represent distributions over relative camera rotations, which allows us to explicitly represent multiple camera modes arising from object symmetries or views. Leveraging these relative predictions, we jointly estimate a consistent set of camera rotations from multiple images. We show that our approach outperforms state-of-the-art SfM and SLAM methods given sparse images on both seen and unseen categories. Furthermore, our probabilistic approach significantly outperforms directly regressing relative poses, suggesting that modeling multimodality is important for coherent joint reconstruction. We demonstrate that our system can be a stepping stone toward in-the-wild reconstruction from multi-view datasets. The project page with code and videos can be found at https://jasonyzhang.com/relpose.
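A minimal sketch of the energy-based formulation described above: a scoring network assigns an energy to each candidate relative rotation for an image pair, and normalizing over a set of sampled candidates yields a (possibly multi-modal) distribution. The network interface and feature inputs are assumptions, not the released code's API.

```python
import torch

def rotation_distribution(energy_net, feats_i, feats_j, candidate_R):
    """Energy-based distribution over relative rotations for one image pair.
    candidate_R: (K, 3, 3) sampled rotations, e.g. a random set or an SO(3) grid.
    energy_net is a placeholder network returning one energy per candidate."""
    energies = energy_net(feats_i, feats_j, candidate_R)     # (K,)
    return torch.softmax(-energies, dim=0)                   # P(R_ij | I_i, I_j)

def most_likely_relative(energy_net, feats_i, feats_j, candidate_R):
    """Pick the mode of the relative-rotation distribution for a pair."""
    probs = rotation_distribution(energy_net, feats_i, feats_j, candidate_R)
    return candidate_R[probs.argmax()]
```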
Contemporary artificial neural networks (ANNs) are trained end-to-end, jointly learning both features and classifiers for the task of interest. Though highly effective, this paradigm imposes significant costs in assembling annotated task-specific datasets and training large-scale networks. We propose to decouple feature learning from the downstream lung ultrasound task by introducing an auxiliary pretext task of visual biomarker classification. We demonstrate that an informative, concise, and interpretable feature space can be learned from ultrasound videos by training models to predict biomarker labels. Notably, the biomarker feature extractor can be trained from data annotated with weak, video-scale supervision. These features can be used by a variety of downstream expert models targeting different clinical tasks (diagnosis, lung severity, S/F ratio). Crucially, the task-specific expert models achieve accuracy comparable to end-to-end models trained directly on such target tasks, while the training cost is substantially reduced.
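The two-stage recipe can be sketched as follows: a backbone is first trained on weak, video-scale biomarker labels, then frozen and reused by small task-specific expert heads. Module and parameter names below are hypothetical, not the authors' released code.

```python
import torch
import torch.nn as nn

class BiomarkerPretext(nn.Module):
    """Stage 1: train a backbone to predict video-level biomarker labels;
    the backbone then serves as a frozen feature extractor."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_biomarkers: int):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(feat_dim, num_biomarkers)

    def forward(self, video_clip):
        return self.head(self.backbone(video_clip))

class DownstreamExpert(nn.Module):
    """Stage 2: a small task-specific model (diagnosis, severity, S/F ratio)
    trained on top of the frozen biomarker features."""
    def __init__(self, feature_extractor: nn.Module, feat_dim: int, num_outputs: int):
        super().__init__()
        self.features = feature_extractor.eval()
        for p in self.features.parameters():
            p.requires_grad = False          # reuse features, cut training cost
        self.classifier = nn.Linear(feat_dim, num_outputs)

    def forward(self, video_clip):
        with torch.no_grad():
            z = self.features(video_clip)
        return self.classifier(z)
```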
Transformers have become prevalent in computer vision owing to their performance and flexibility in modeling complex operations. Of particular significance is the "cross-attention" operation, which allows a vector representation (e.g., of an object in an image) to be learned by attending to an arbitrarily sized set of input features. Recently, "masked attention" was proposed, in which a given object representation only attends to those image pixel features for which the object's segmentation mask is active. This specialization of attention has proven beneficial for various image and video segmentation tasks. In this paper, we propose another specialization of attention which enables attending over "soft masks" (those with continuous mask probabilities instead of binary values) and is also differentiable through these mask probabilities, thus allowing the mask used for attention to be learned within the network without direct loss supervision. This can be useful for several applications. Specifically, we apply our "differentiable soft masked attention" to the task of weakly-supervised Video Object Segmentation (VOS), where we develop a transformer-based network for VOS that only requires a single annotated image frame for training, but can also benefit from cycle-consistency training on videos with just one annotated frame. Although there is no loss on the masks of unlabeled frames, the network is still able to segment objects in those frames thanks to our novel attention formulation. Code: https://github.com/ali2500/hodor/blob/main/hodor/modelling/encoder/soft_masked_attention.py
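One natural realization of soft masked attention, sketched below, biases the cross-attention logits by the log of the continuous mask probabilities, so the operation reduces to ordinary masked attention for binary masks while remaining differentiable with respect to the mask; this is a sketch of the idea, and the linked soft_masked_attention.py is the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def soft_masked_attention(queries, keys, values, mask_probs, eps=1e-6):
    """Soft masked cross-attention (sketch).
    queries: (Q, d) object descriptors; keys/values: (N, d) pixel features;
    mask_probs: (Q, N) continuous per-pixel mask probabilities in [0, 1]."""
    d = queries.shape[-1]
    logits = queries @ keys.t() / d ** 0.5           # (Q, N) scaled dot-product
    logits = logits + torch.log(mask_probs + eps)    # soft, differentiable masking
    attn = F.softmax(logits, dim=-1)
    return attn @ values                             # (Q, d) attended features
```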
Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI. However, existing CL benchmarks, e.g., Permuted-MNIST and Split-CIFAR, make use of artificial temporal variation and do not align with or generalize to the real world. In this paper, we introduce CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts in the real world that spans a decade (2004-2014). We build CLEAR from existing large-scale image collections (YFCC100M) through a novel and scalable low-cost approach to visio-linguistic dataset curation. Our pipeline makes use of pretrained vision-language models (e.g., CLIP) to interactively build labeled datasets, which are further validated with crowd-sourcing to remove erroneous and even inappropriate images (hidden in the original YFCC100M). The major advantage of CLEAR over prior CL benchmarks is the smooth temporal evolution of visual concepts with real-world imagery, including both high-quality labeled data for each time period and abundant unlabeled samples for continual semi-supervised learning. We find that a simple unsupervised pre-training step can already boost state-of-the-art CL algorithms that only utilize fully-supervised data. Our analysis also reveals that mainstream CL evaluation protocols that train and test on iid data artificially inflate the performance of CL systems. To address this, we propose a novel "streaming" protocol for CL that always tests on the (near) future. Interestingly, the streaming protocol (a) can simplify dataset curation, since today's test set can be repurposed for tomorrow's train set, and (b) can produce more generalizable models with more accurate estimates of performance, since all labeled data from each time period are used for both training and testing (unlike the classic iid train-test split).
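The streaming protocol can be summarized in a few lines: at each time period the model is trained on all labeled data observed so far and evaluated on the next period's bucket, so today's test set can later be reused for training. The helper functions below are placeholders, not part of the released benchmark code.

```python
def streaming_evaluation(buckets, make_model, train, evaluate):
    """Streaming CL protocol (sketch): always test on the (near) future.
    buckets: list of per-time-period labeled datasets, ordered by time.
    make_model/train/evaluate are placeholder callables supplied by the user."""
    results = []
    seen = []
    for t in range(len(buckets) - 1):
        seen.append(buckets[t])              # all labeled data from periods 0..t
        model = train(make_model(), seen)    # (or finetune the previous model)
        results.append(evaluate(model, buckets[t + 1]))  # test on period t+1
    return results
```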
Prior work for articulated 3D shape reconstruction often relies on specialized sensors (e.g., synchronized multi-camera systems) or pre-built 3D deformable models (e.g., SMAL or SMPL). Such methods are unable to scale to diverse sets of objects in the wild. We present BANMo, a method that requires neither a specialized sensor nor a pre-defined template shape. BANMo builds high-fidelity, articulated 3D models (including shape and animatable skinning weights) from many monocular casual videos in a differentiable rendering framework. While the use of many videos provides more coverage of camera views and object articulations, they introduce significant challenges in establishing correspondence across scenes with different backgrounds, illumination conditions, etc. Our key insight is to merge three schools of thought: (1) classic deformable shape models that make use of articulated bones and blend skinning, (2) volumetric neural radiance fields (NeRFs) that are amenable to gradient-based optimization, and (3) canonical embeddings that generate correspondences between pixels and an articulated model. We introduce neural blend skinning models that allow for differentiable and invertible articulated deformations. When combined with canonical embeddings, such models allow us to establish dense correspondences across videos that can be self-supervised with cycle consistency. On real and synthetic datasets, BANMo shows higher-fidelity 3D reconstructions than prior work for humans and animals, with the ability to render realistic images from novel viewpoints and poses. Project webpage: banmo-www.github.io.
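For reference, the classic linear blend skinning that BANMo builds on can be written in a few lines: each canonical point is warped by a skinning-weighted combination of per-bone rigid transforms. BANMo's neural blend skinning additionally predicts the weights with a network and keeps the warp invertible; the sketch below shows only the basic forward warp.

```python
import torch

def blend_skinning(points, bone_rotations, bone_translations, skin_weights):
    """Linear blend skinning (forward warp).
    points: (N, 3) canonical 3D points,
    bone_rotations: (B, 3, 3), bone_translations: (B, 3),
    skin_weights: (N, B) with rows summing to 1."""
    per_bone = torch.einsum("bij,nj->nbi", bone_rotations, points)   # rotate by each bone
    per_bone = per_bone + bone_translations.unsqueeze(0)             # add per-bone translation
    return torch.einsum("nb,nbi->ni", skin_weights, per_bone)        # weighted blend -> (N, 3)
```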